Search Results: "cbf"

3 November 2016

Norbert Preining: Debian/TeX Live 2016.20161103-1

This month's update falls on a national holiday in Japan. My recent start as a regular company employee in Japan doesn't leave me enough time to work on Debian during normal days, so things have to wait for holidays. There have been a few notable changes in the current packages; above all I wanted to fix an RC bug, and on the way I also fixed several other (sometimes rather old) bugs.
From the list of new packages I want to pick out apxproof: I have written something similar myself for one of my rather long papers (with about 60pp of proofs), where I had to factor the proofs out into an appendix. I did it my own way, but I would have preferred to have a nice package! Another interesting change is the upstream merge of collection-mathextra (which translated to the Debian package texlive-math-extra) and collection-science (Debian: texlive-science) into a new collection, collection-mathscience. Since introducing new packages and phasing out old ones is generally a pain in Debian, I decided to digress from the upstream naming convention and use texlive-science for the new collection-mathscience. In the end, Mathematics is the most important science of all.

Finally, a word about removals: several ConTeXt packages have been removed because they are outdated. These removals will find their way into an update of the Debian ConTeXt package in the near future. The TeX Live packages lost voss-mathmode, which was retracted by its author for various reasons. He is working on an updated version that will hopefully reappear in both TeX Live and Debian in the near future. Well, that's it for now. Here is the full list with links. Enjoy.

New packages: apxproof, bangorexam, biblatex-gb7714-2015, biblatex-lni, biblatex-sbl, context-cmscbf, context-cmttbf, context-inifile, context-layout, delimset, latex2nemeth, latexbangla, latex-papersize, ling-macros, notex-bst, platex-tools, testidx, uppunctlm, wtref, xcolor-material.

Removed packages: voss-mathmode.

Updated packages: apa6, autoaligne, babel-german, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-manuscripts-philology, biblatex-nature, biblatex-realauthor, bibtex, bidi, boondox, bxcjkjatype, chickenize, churchslavonic, cjk-gs-integrate, context-filter, cooking-units, ctex, denisbdoc, dvips, europasscv, fixme, glossaries, gzt, handout, imakeidx, ipaex-type1, jsclasses, jslectureplanner, kpathsea, l3build, l3experimental, l3kernel, l3packages, latexindent, latexmk, listofitems, luatexja, marginnote, mcf2graph, minted, multirow, nameauth, newpx, newtx, noto, nucleardata, optidef, overlays, pdflatexpicscale, pst-eucl, reledmac, repere, scanpages, semantic-markup, tableaux, tcolorbox, tetex, texlive-scripts, ticket, todonotes, tracklang, tudscr, turabian-formatting, updmap-map, uspace, visualtikz, xassoccnt, xecjk, yathesis.

31 October 2016

Chris Lamb: Free software activities in October 2016

Here is my monthly update covering what I have been doing in the free software world (previously):

Debian & Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most GNU/Linux distributions provide binary (or "compiled") packages to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by ensuring that identical binary packages are always generated from a given source.
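The verification this enables is conceptually simple. The sketch below is not the project's actual tooling and uses hypothetical file names; it only shows the comparison step: hash the distributed binary package and an independently rebuilt one, and investigate with a tool such as diffoscope if they differ.

import hashlib

def sha256(path):
    # Hash the file in chunks so large packages do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the archive's binary package and a local rebuild of it.
official = sha256("hello_1.0-1_amd64.deb")
rebuilt = sha256("rebuild/hello_1.0-1_amd64.deb")
print("reproducible" if official == rebuilt else "differs -- compare with diffoscope")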

  • Presented a talk entitled "Reproducible Builds" at Software Freedom Kosova, in Prishtina, Republic of Kosovo.

  • I filed my 2,500th bug in the Debian BTS: #840972: golang-google-appengine: accesses the internet during build.

  • In order to build packages reproducibly, one not only needs identical sources but also some external and sharable definition of the environment used for a particular build, stipulating such things as the version numbers of the required build-dependencies. It is not currently clear how to handle these .buildinfo files after the archive software has processed them, nor how to make them available to the world, so I started development on a proof-of-concept server to see what issues arise in practice. It is available at buildinfo.debian.net. (A small sketch of reading such a file follows after this list.)

  • Chaired an IRC meeting and ran a poll to determine a regular meeting time.

  • Submitted two design proposals to our wiki page.

  • Improvements to our tests.reproducible-builds.org testing framework:

    • Move regular "Scheduled in..." messages to the #debian-reproducible-changes IRC channel.
    • Use our log_info method instead of manual echo calls.
    • Correct a typo: "all sources packages" should be "all source packages".
    • Submit .buildinfo files to buildinfo.debian.net.
    • Create GPG key on nodes for buildinfo.debian.net at deploy time, not "lazily".
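As mentioned above, here is a rough sketch of reading such a .buildinfo file. It is illustrative only: the parser handles the deb822-style "Field: value" layout with indented continuation lines, the file name is hypothetical, and a real .buildinfo may additionally be PGP-signed, which this sketch ignores.

import hashlib

def parse_deb822(path):
    # Minimal deb822-style parser: "Field: value" lines; a line starting
    # with whitespace continues the previous field.
    fields, current = {}, None
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():
                continue
            if line[0].isspace() and current:
                fields[current] += "\n" + line.strip()
            elif ":" in line:
                current, value = line.split(":", 1)
                fields[current] = value.strip()
    return fields

info = parse_deb822("hello_1.0-1_amd64.buildinfo")  # hypothetical file name
# Each Checksums-Sha256 entry is "<sha256> <size> <filename>".
for entry in info.get("Checksums-Sha256", "").splitlines():
    if not entry:
        continue
    digest, size, name = entry.split()
    actual = hashlib.sha256(open(name, "rb").read()).hexdigest()
    print(name, "OK" if actual == digest else "MISMATCH")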

My work in the Reproducible Builds project was also covered in our weekly reports. (#75, #76, #77 & #78).

I also submitted 14 patches to fix specific reproducibility issues in bio-eagle, cf-python, fastx-toolkit, fpga-icestorm, http-icons, lambda-align, mypy, playitslowly, seabios, stumpwm, sympa, tj3, wims-help & xotcl.
Debian LTS

This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:
  • Seven days of "frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 647-1 for freeimage correcting an out-of-bounds write vulnerability in the XMP image handling functionality.
  • Issued DLA 649-1 for python-django fixing a possible CSRF protection bypass on sites that use Google Analytics.
  • Issued DLA 654-1 for libxfixes preventing an integer overflow when a malicious client sent INT_MAX as a "length".
  • Issued DLA 662-1 for quagga correcting a programming error where two constants were confused, which could cause a stack overrun in the IPv6 routing code.
  • Issued DLA 688-1 for cairo to prevent a DoS attack where a malicious SVG could generate invalid pointers.

Uploads
  • gunicorn:
    • 19.6.0-7 Set supplementary groups when changing uid, add an example systemd .service file to gunicorn-examples, and expand README.Debian to make it clearer what to do now that /etc/gunicorn.d has been removed.
    • 19.6.0-8 Correct previous supplementary groups patch to be compatible with Python 3.
  • redis:
    • 3:3.2.4-2 Ensure that sentinel's configuration actually writes to a pidfile location so that systemd can detect that the daemon has started.
    • 3:3.2.5-1 New upstream release.
  • libfiu:
    • 0.94-8 Fix FTBFS under Bash due to lack of && in debian/rules.
    • 0.94-9 Ensure the build is reproducible by sorting injected modules.
  • aptfs (2:0.8-2) Minor cosmetic changes.

NMUs
  • libxml-dumper-perl (0.81-1.2) Move away from an unsupported debhelper compat level 4.
  • netatalk (2.2.5-1.1) Drop build-dependency on hardening-includes.

QA uploads
  • anon-proxy (00.05.38+20081230-4) Move to a supported debhelper compatibility level 9.
  • ara (1.0.32) Make the build reproducible.
  • binutils-m68hc1x (1:2.18-8) Make the build reproducible & move to a supported debhelper compatibility level.
  • fracplanet (0.4.0-5) Make the build reproducible.
  • libnss-ldap (265-5) Make the build reproducible.
  • python-uniconvertor (1.1.5-3) Fix an "option release requires an argument" FTBFS. (#839375)
  • ripole (0.2.0+20081101.0215-3) Actually include the ripole binary in the package (#839919) & enable hardening flags.
  • twitter-bootstrap (2.0.2+dfsg-10) Fix incorrect copyright formatting when building under Bash. (#824592)
  • zpaq (1.10-3) Make the build reproducible.


Debian FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: ace-link, amazon-s2n, avy, basez, bootstrap-vz, bucklespring, camitk, carettah, cf-python, debian-reference, dfcgen-gtk, efivar, entropybroker, fakesleep, gall, game-data-packager, gitano, glare, gnome-panel, gnome-shell-extension-dashtodock, gnome-shell-extension-refreshwifi, gnome-shell-extension-remove-dropdown-arrows, golang-github-gogits-go-gogs-client, golang-github-gucumber-gucumber, golang-github-hlandau-buildinfo, golang-github-hlandau-dexlogconfig, golang-github-hlandau-goutils, golang-github-influxdata-toml, golang-github-jacobsa-crypto, golang-github-kjk-lzma, golang-github-miekg-dns, golang-github-minio-sha256-simd, golang-github-nfnt-resize, golang-github-nicksnyder-go-i18n, golang-github-pointlander-compress, golang-github-pointlander-jetset, golang-github-pointlander-peg, golang-github-rfjakob-eme, golang-github-thecreeper-go-notify, golang-github-twstrike-gotk3adapter, golang-github-unknwon-goconfig, golang-gopkg-dancannon-gorethink.v1, golang-petname, haskell-argon2, haskell-binary-parsers, haskell-bindings-dsl, haskell-deriving-compat, haskell-hackage-security, haskell-hcwiid, haskell-hsopenssl-x509-system, haskell-megaparsec, haskell-mono-traversable-instances, haskell-prim-uniq, haskell-raaz, haskell-readable, haskell-readline, haskell-relational-record, haskell-safe-exceptions, haskell-servant-client, haskell-token-bucket, haskell-zxcvbn-c, irclog2html, ironic-ui, lace, ledger, libdancer2-plugin-passphrase-perl, libdatetime-calendar-julian-perl, libdbix-class-optimisticlocking-perl, libdbix-class-schema-config-perl, libgeo-constants-perl, libgeo-ellipsoids-perl, libgeo-functions-perl, libgeo-inverse-perl, libio-async-loop-mojo-perl, libmojolicious-plugin-assetpack-perl, libmojolicious-plugin-renderfile-perl, libparams-validationcompiler-perl, libspecio-perl, libtest-time-perl, libtest2-plugin-nowarnings-perl, linux, lua-scrypt, mono, mutt-vc-query, neutron, node-ansi-font, node-buffer-equal, node-defaults, node-formatio, node-fs-exists-sync, node-fs.realpath, node-is-buffer, node-jison-lex, node-jju, node-jsonstream, node-kind-of, node-lex-parser, node-lolex, node-loud-rejection, node-random-bytes, node-randombytes, node-regex-not, node-repeat-string, node-samsam, node-set-value, node-source-map-support, node-spdx-correct, node-static-extend, node-test, node-to-object-path, node-type-check, node-typescript, node-unset-value, nutsqlite, opencv, openssl1.0, panoramisk, perl6, pg-rage-terminator, pg8000, plv8, puppet-module-oslo, pymoc, pyramid-jinja2, python-bitbucket-api, python-ceilometermiddleware, python-configshell-fb, python-ewmh, python-gimmik, python-jsbeautifier, python-opcua, python-pyldap, python-s3transfer, python-testing.common.database, python-testing.mysqld, python-testing.postgresql, python-wheezy.template, qspeakers, r-cran-nleqslv, recommonmark, rolo, shim, swift-im, tendermint-go-clist, tongue, uftrace & zaqar-ui.

27 September 2016

Kees Cook: security things in Linux v4.4

Previously: v4.3. Continuing with interesting security things in the Linux kernel, here's v4.4. As before, if you think there's stuff I missed that should get some attention, please let me know.

seccomp Checkpoint/Restore-In-Userspace: Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me), this feature gives me the creeps. :)

x86 W^X detection: Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA, which seeks to eliminate these kinds of memory ranges. He corrected this in v4.3 and added CONFIG_DEBUG_WX in v4.4, which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody's delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

x86_64 vsyscall CONFIG: I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn't need vsyscall at all, so it can just be removed entirely. Any kernel built this way that turned out to need to support a pre-2.15 glibc could still re-enable it at the kernel command line with vsyscall=emulate.

That's it for v4.4. Tune in tomorrow for v4.5!
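For the curious, whether the legacy vsyscall page is still mapped on a running system can be checked from userspace by looking for the [vsyscall] entry in /proc/self/maps. This is only a small illustrative check (Linux-only, and not part of the kernel work described above):

# Look for the fixed-address vsyscall page in this process's memory map.
# With CONFIG_LEGACY_VSYSCALL_NONE (or when booted with vsyscall=none) the
# entry is typically absent; in emulate mode it is still listed.
with open("/proc/self/maps") as maps:
    vsyscall = [line.strip() for line in maps if "[vsyscall]" in line]
print(vsyscall[0] if vsyscall else "no vsyscall page mapped")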

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

16 June 2016

John Goerzen: Mud, Airplanes, Arduino, and Fun

The last few weeks have been pretty hectic in their way, but I've also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud. For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty. Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then... water. They put the hose over the slide and made a water slide (more a mud slide, maybe). [photo IMG_7688] Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours. [video MVI_7738]

Flying to Petit Jean, Arkansas. Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it's rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain. And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning: [photo IMG_7475] And here's a view of one of the trails: [photo IMG_7576] The sunset views were pretty nice, too: [photo IMG_7610] And finally, the plane we flew out in, parked all by itself on the ramp: [photo IMG_20160522_171823] It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison. Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too. Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River. [photo IMG_20160611_134313] I was amused to find this hanging in the county historical society museum: [photo IMG_20160611_153826] One fascinating find is a Regina Music Box, popular in the late 1800s and early 1900s. It operates under the same principles as the cylindrical ones you might have seen. But I am particularly impressed with the effort that would go into developing these discs in the pre-computer era, as of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum: [video VID_20160611_151504]

An Arduino Project with Jacob. One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his "police station", so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn't ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off: [video VID_20160614_074802.mp4]

30 March 2016

Mike Gabriel: Pushing X.org Git repos to Github et al.

TL;DR: If you want to know why pushing several of the X.org repositories to Github or GitLab instances fails and how this can be worked around, you may want to continue reading.

Why we stumbled over this issue... As a joint effort of the Arctica Project, TheQVD and X2Go, Ulrich Sibiller and I are currently preparing a build workflow for the nxagent X-server (version 3) [1] that allows building nxagent against the modular X.org 7.0 (using autoconf and automake) rather than the monolithic build workflow of X.org 6.9 (using ancient imake). Our goal is to rewind all X.org components required for building nxagent back to a state where nxagent successfully builds and runs. Then we will go through various (probably) monthly cycles of updates...

Our first hurdle... Now that Ulrich has a functioning nxagent-against-X.org-7.0 workflow locally, we want to get everything into the Arctica Project's Github namespace. Which fails...
[mike@minobo xorg.upstream]$ git clone https://anongit.freedesktop.org/git/xorg/lib/libX11.git
Cloning into 'libX11'...
remote: Counting objects: 18520, done.
remote: Compressing objects: 100% (3441/3441), done.
remote: Total 18520 (delta 15094), reused 18325 (delta 14954)
Receiving objects: 100% (18520/18520), 5.98 MiB | 549.00 KiB/s, done.
Resolving deltas: 100% (15094/15094), done.
Checking connectivity... done.
[mike@minobo xorg.upstream]$ cd libX11/
[mike@minobo libX11 (master)]$ git remote rename origin upstream 
[mike@minobo libX11 (master)]$ git remote add origin git@github.com:ArcticaProject/libX11.git
[mike@minobo libX11 (master)]$ git push --mirror origin 
Counting objects: 18520, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3301/3301), done.
remote: error: object 70d5e4d45dd7bf1e05b099cb5a4dd529344084f0: missingSpaceBeforeDate: invalid author/committer line - missing space before date
remote: fatal: Error in object
error: pack-objects died of signal 13
error: failed to push some refs to 'git@github.com:ArcticaProject/libX11.git'
This is a known issue at X.org / Mesa upstream [2]. The underlying problem occurs with Git 2.5 and above... If you run git-fsck from Git (>= 2.5) against several of the X.org Git repositories, you will stumble over the above issue. For libX11.git, we see these errors reported when running git-fsck from current Debian unstable:
mike@sid:~/xorg/libX11.git$ git fsck 
Checking object directories: 100% (256/256), done.
error in tag 70d5e4d45dd7bf1e05b099cb5a4dd529344084f0: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag a3bfe698090a5d41f1e9acb1b57a049085d6b04e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag c1fc8c9aec1bc92d03d08d5e986dd40b194a7a3e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag d4c27f7e7d8e2ea32418f341ad85c33f3b76862d: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 79780379aee21bf4d4bcb046a3b54774893ea6b1: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag a638de80ec20dc1de625cd323ed19a6646fecf2e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag f6a6414e4e52a971a35fd118c350567e2383d034: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag b5c6810e21be2e7ccfac8f0539d46bf75dbe50a0: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 86cec9d20428cc190ec7278a0abe481b180288ac: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag ac912988091574790c7cffbbc2c60b8c59f8fcef: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2d01674092951cd3284b3b099978b2436ea468ad: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 922d0534154668918eaa09a36cc5cd53fc6b71b7: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 8ef91b9cc0a4a7b0361f6b40b3f735221c8272cb: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 1516173727005a87aa501e1ad708d72e6ae6e753: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 77c9e63a3abff1593ccbed249b2c3ae7f68e1833: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 58044c7087137ce2b521297dbb31934ccaedc94e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2e0d061c6830891dcc856a04aac900e5bb6d0779: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2f58c7cc4cbf3d30927f6301039b6bf018ed38fd: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 77fc2e1e26019c3ba92275d249fe1979570255d2: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 9b2e06fa262f23dbcbbccd42be6477e08a452268: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 7f99410bb9cc9f7cbbc2d43d8ad044771a6eec0a: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 99b59b83f126b4abb9da89f577a5b8147d543c25: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 97170c33f99761acd932f968e21d6d7664e7ba80: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag b39966d452127c992987d082f69321f950ae4c39: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 05e20c3bd8ac6b7e873607db73894486a4ac0519: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag f478497b5a3e9d6b92237e0f408c9f0f18108f52: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag c5ed936f28ce70c8932e33d67a6e06d140e35ba8: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2b6beba295a6ff997dfe15a732c95ae6577fa735: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 32edbb1e9c5b7d1c6f8639dab85e5368d303df7f: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 490fbe2afb9ffc7057c0eb150f7cdd4d4fa8686c: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag fc605ac48bd7d675d6d04162a0f3bfebe27f2037: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 6289ecc26d1d505f395c59da744a9354d7aad2a2: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 8c9c30dac72aebd1a2c31a55a47c0ac11bc328db: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 3b7aa57994924be0dad693e837559d9ee900299c: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag b02ad9a8ec96e3c1027ca71289aeef5b8748e17e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 4402ab1a86226a9cd0ce84fb8147b1c4958c684d: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag bb02b4fbde316644c771d28d5edeeb9b213a6d6a: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 95b6462c70213b562c9d2b8726eda706edc9c456: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag a99fea43ad53907623f269942c55237c238645e0: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 0c417ce98a6cf927ce7a1359b198b2bb746c707e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag ce3ff6b102f360fc9dcd3a85361f44da054de0b1: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2fb7b16b38a365c5009bd170d4463634c1f3de26: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 95570f484bfde5fc2ed8548cb17b87d8114a2866: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag d6e5895c104cfe0d135494a605013dd2910a93a1: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 49be7144523c4a56afcb389a9aa021d167710d2c: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 020353d1b982ac938d229039ca13bfea5793fcd9: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 991792d4f8477ab65ad4d8a0245c9a3bbb1f0294: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag e8037555efe299a7143370b485f754f6b08ffac7: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 4fdc8e36a5022f3e32d5567305d8e9cde4011f1a: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 9c93a19a76163328e30a7306bb0babe3175fb9a2: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 9f90bf4424250f80340b3e2803d343c74d2adb0b: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 2e7a8c7974777e3e869de9013025dc3f61633e1e: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 59b7c06da2a64d72668394e9239e3dfe8a5cff59: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 0263c4f684faa9a5126a0db524d5d35be7277e52: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 1514e654c1a9b346cbfca5a81f6d331f106242aa: missingSpaceBeforeDate: invalid author/committer line - missing space before date
error in tag 6aa83ec3d1d82ee332cedcf4e99d3cbd56e73edd: missingSpaceBeforeDate: invalid author/committer line - missing space before date
Checking objects: 100% (18523/18523), done.
What actually is going wrong here... When looking at one of those tags (luckily, only tags are affected), we see that something's not ok with the date string of the tag (I am not a Git plumber, so sorry for being superficial here):
mike@sid:~/xorg/libX11.git$ git show 2fb7b16b38a365c5009bd170d4463634c1f3de26
tag XORG-RELEASE-1-TM-MERGE
Tagger: Alan Coopersmith 
Date:   Thu Jan 1 00:00:00 1970 +0000
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
commit 84adc60cae8e944777563501dd633622a28bdb3b
Author: Alan Coopersmith 
Date:   Sat Mar 27 07:08:16 2004 +0000
    Fix typo (IsModiferKey should be IsModifierKey)
[... not quoting the diff for the above commit ...]
How to work around that issue... We then got the idea that we could redo those tags with the date string derived from the commit that each tag points at. This can be scripted (credits go to an unnamed person in Ulrich's circle!!!):
mike@sid:~/xorg/libX11$ cat repair.sh
#!/bin/bash
set -x
git fsck 2>&1 | tee git-fsck.log | grep "error in tag" | sed -r -n -e 's/error in tag ([0-9a-f]+):.*/\1/p' | while read broken; do
         echo $broken
         commit=$(git rev-parse ${broken}^{commit})
         tag=$(git cat-file tag $broken | sed -n 's/^tag //p')
         tag_msg=$(git cat-file tag $broken | sed -n '/^$/,$p' | tail -n +2)
         export GIT_COMMITTER_NAME="$(git log -1 --format='%cn' $commit)"
         export GIT_COMMITTER_EMAIL="$(git log -1 --format='%ce' $commit)"
         export GIT_COMMITTER_DATE="$(git log -1 --format='%cD' $commit)"
         git tag -a -f -m "$tag_msg" $tag $commit
done
# drop all old content...
git reflog expire --expire=all --all
git gc --prune=all
# no more errors should be reported...
git fsck
Copy this repair.sh into the Git repo itself and let the script walk over your tags. After this script has been applied you can push the X.org Git repo with broken tags redone to Git hosters like Github or GitLab. light+love,
Mike [1] https://github.com/ArcticaProject/nx-libs
[2] https://lists.freedesktop.org/archives/mesa-dev/2016-February/106268.html

20 March 2016

Norbert Preining: Debian/TeX Live 2015.20160320-1 and biber 2.4-1

About one month has passed and here is the usual update of TeX Live packages for Debian, this time also with an update of biber to accompany the updated version of biblatex. This will (probably) be the last upload before TeX Live 2015 gets frozen in preparation for 2016. After the freeze there will be some time of peace, and updates will go to experimental with the new binaries.

From the list of updated and new packages I want to mention that biblatex has been updated together with biber, and most biblatex styles should already be updated to work with the new biblatex, too. From the new packages, one in particular has caught my attention: prooftrees. As I am writing a lot of proofs in my research, I am happy to see a new package dealing with this, especially because it is built on top of the excellent package forest (which in turn uses tikz), allowing for very compact proofs.

Updated packages: aastex, abntex2, academicons, acro, amsmath, animate, archaeologie, asciilist, babel, babel-french, babel-friulan, babel-russian, babel-spanish, beamertheme-metropolis, bibexport, biblatex, biblatex-apa, biblatex-bookinarticle, biblatex-caspervector, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-manuscripts-philology, biblatex-nature, biblatex-philosophy, biblatex-phys, biblatex-publist, biblatex-realauthor, biblatex-science, bxjscls, cabin, caption, cbfonts-fd, celtic, chemmacros, computational-complexity, crimson, csplain, datetime2-english, diagbox, disser, droit-fr, dtk, dvips, ejpecp, emisa, factura, fibeamer, fithesis, forest, gost, hobby, hyperxmp, inconsolata, lualatex-math, mandi, mcf2graph, media9, nameauth, ocgx2, parades, pdftex, pkuthss, poetrytex, ptex, reledmac, roundrect, showhyphens, siunitx, spath3, splitindex, suftesi, tcolorbox, tetex, tex4ht, texinfo, texlive-scripts, thuthesis, titlesec, turabian-formatting, unicode-data, uptex, velthuis, venndiagram, xassoccnt, xint, xsavebox.

New packages: beamercolorthemeowl, bibletext, cochineal, formation-latex-ul, gobble, keyvaltable, lroundrect, luatex85, mathpartir, miama, multidef, parades, pgfornament, prooftrees, visualpstricks, visualtikz, xsavebox, ycbook.

Enjoy.

21 February 2016

Vincent Sanders: Stack 'em, pack 'em and rack 'em.

As you may be aware I have a bit of a problem with Single Board Computers in that I have a lot of them. Keeping them organised has turned into a bit of a problem.

cluttered shelf of SBC
I designed clip cases for many of these systems giving me a higher storage density on my rack shelves and built a power supply to reduce the cabling complexity. These helped but I still ended up with a cluttered shelf full of SBC.

I decided I would make a rack enclosure to hold the SBC. I was restricted to materials I could easily CNC machine, which limited me to acrylic plastics or wood.

laser cutting the design, viewed through a heavily tinted filter
Initially I started with the idea of housing the individual boards in a toast rack arrangement. This would mean that the enclosure would have to be at least 2U high to fit the boards, and all the existing cases would have to be discarded. This approach was dropped when the drawbacks of having no flexibility and only being able to fit the units that were selected at design time (connector cutouts and mounting hole placement) became apparent.

Instead I changed course to try and incorporate the existing cases which already solved the differing connector and mounting placement problem and gave me a uniform size to consider. Once I had this approach the design came pretty quickly. I used a tube girder construction 1U in height to get as much strength as possible from the 3mm acrylic plastic I would use.

laser cut pieces arranged for assembly still with protective film on
The design was simply laser cut from sheet stock and fastened together with M3 nuts and bolts. Once I corrected the initial design errors (I managed to get almost every important dimension wrong on the first attempt) the result was a success.

working prototype resting on initial version
The prototype is a variety of colours because makespace ran out of suitably sized clear acrylic stock, but the colouring has no effect on the result other than aesthetics. The structure gives a great deal of rigidity and there is no sagging or warping; indeed, testing on the prototype reached almost 50kg of loading without a failure (one end clamped and the other end loaded at 350mm distance).

I added some simple rotating latches at the front which keep the modules held in place and allow units to be removed quickly if necessary.

rack slots installed and in use
Overall this project was successful and I can now pack five SBC per U neatly. It does limit me to using systems cased in my "slimline" designs (68x30x97mm) which currently means the Raspberry Pi B+ style and the Orange Pi PC.

One small drawback is access to the I/O and power connectors. These need to be right-angled and must be unplugged before unit removal, which can be a little fiddly. Perhaps a toast rack design of cases would have given easier connector access, but I am happy with this trade-off of space for density.

As usual the design files are freely available, perhaps they could be useful as a basis for other laser cut rack enclosure designs.

15 February 2016

Julien Danjou: Timeseries storage and data compression

The first major version of the scalable time series database I work on, Gnocchi, was released a few months ago. In this first iteration, it took a rather naive approach to data storage. We had little idea about whether and how our distributed back-ends were going to be heavily used, so we stuck to the code of the first proof-of-concept written a couple of years ago. Recently we got more feedback from our users and ran a few benchmarks. That gave us enough input to start improving our storage strategy.

Data split. Up to Gnocchi 1.3, all data for a single metric are stored in a single gigantic file per aggregation method (min, max, average...). This means that the file can grow to several megabytes in size, which makes it slow to manipulate. For the next version of Gnocchi, our first piece of work has been to rework that storage and split the data into smaller parts. [diagram: Gnocchi Carbonara archives split] The diagram above shows how data are organized inside Gnocchi. Until version 1.3, there would have been only one file for each aggregation method. In the upcoming 2.0 version, Gnocchi will split all these data into smaller parts, where each data split is stored in a file/object. This allows us to manipulate smaller pieces of data and to increase the parallelism of the CRUD operations on the back-end, leading to large speed improvements. In order to split timeseries into several chunks, Gnocchi defines a maximum number of N points to keep per chunk, to limit their maximum size. It then defines a hash function that produces a non-unique key for any timestamp. This makes it easy to find in which chunk any timestamp should be stored or retrieved.

Data compression. Up to Gnocchi 1.3, the data stored for each metric is simply serialized using msgpack, a fast and small serialization format. Though, this format does not provide any compression. That means that storing data points needs 8 bytes for a timestamp (64-bit timestamp with nanosecond precision) and 8 bytes for a value (64-bit double-precision floating-point), plus some overhead (extra information and msgpack itself). After looking around at how to compress all these measures, I stumbled upon a paper from some Facebook engineers about Gorilla, their in-memory time series database, entitled "Gorilla: A Fast, Scalable, In-Memory Time Series Database". For reference, part of this encoding is also used by InfluxDB in its new storage engine. The first technique I implemented is easy enough, and it's inspired by delta-of-delta encoding. Instead of storing each timestamp for each data point, and since all the data points are aggregated on a regular interval, we transpose points to be the time difference divided by the interval. For example, the suite of timestamps timestamps = [41230, 41235, 41240, 41250, 41255] is encoded into timestamps = [41230, 1, 1, 2, 1], interval = 5. This allows regular compression algorithms to reduce the size of the integer list using run-length encoding.

To actually compress the values, I tried two different algorithms (the Gorilla-style XOR encoding and LZ4). I then benchmarked these solutions: [chart: Gnocchi Carbonara compression speed] The XOR algorithm implemented in Python is pretty slow compared to LZ4. Truth is that python-lz4 is fully implemented in C, which makes it fast. I've profiled my XOR implementation in Python, to discover that one operation took 20% of the time: count_lead_and_trail_zeroes, which is in charge of counting the number of leading and trailing zeroes in a binary number. [chart: Gnocchi Carbonara compression XOR profiling] I tried 2 Python implementations of the same algorithm (and submitted them to my friend and Python developer Victor Stinner, by the way). The first version, using string search with .index(), is 10× faster than the second one, which only does integer computation. Ah, Python... As Victor explained, each Python operation is slow and there are a lot of them in the second version, whereas .index() is implemented in C, really well optimized, and only needs 2 Python operations. Finally, I ended up optimizing that code by leveraging cffi to use ffsll() and flsll() directly. That decreased the run-time of count_lead_and_trail_zeroes by 45%, making the entire XOR compression code a small 7% faster. This is not enough to catch up with LZ4's speed. At this stage, the only solution to achieve high speed would probably be to go with a full C implementation.

[chart: Gnocchi Carbonara compression size] Considering the compression ratio of the different algorithms, they are pretty much identical. The worst case scenario (random values) for LZ4 compresses down to 9 bytes per data point, whereas XOR can go down to 7.38 bytes per data point. In general XOR encoding beats LZ4 by 15%, except for cases where all values are 0 or 1. However, LZ4 is faster than XOR by a factor of 4 to 70 depending on the case. That means that we'll use LZ4 for data compression in Gnocchi 2.0. It's possible that we could achieve an equally fast compression/decompression algorithm, but I don't think it's worth the effort right now: it'd represent a lot of code to write and to maintain.
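A minimal sketch of that timestamp transform (illustrative only, not the actual Gnocchi code) is below; it reproduces the example above and shows that the operation is lossless as long as all timestamps fall on the regular interval.

def encode(timestamps, interval):
    # Keep the first timestamp as-is; store every following point as the
    # delta to its predecessor divided by the regular aggregation interval.
    deltas = [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        deltas.append((cur - prev) // interval)
    return deltas

def decode(deltas, interval):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d * interval)
    return out

timestamps = [41230, 41235, 41240, 41250, 41255]
assert encode(timestamps, 5) == [41230, 1, 1, 2, 1]
assert decode(encode(timestamps, 5), 5) == timestamps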

20 December 2015

Lunar: Reproducible builds: week 34 in Stretch cycle

What happened in the reproducible builds effort between December 13th and December 19th:

Infrastructure. Niels Thykier started implementing support for .buildinfo files in dak. A very preliminary commit was made by Ansgar Burchardt to prevent .buildinfo files from being removed from the upload queue.

Toolchain fixes. Mattia Rizzolo rebased our experimental debhelper with the changes from the latest upload. New fixes have been merged by OCaml upstream.

Packages fixed. The following 39 packages have become reproducible due to changes in their build dependencies: apache-mime4j, avahi-sharp, blam, bless, cecil-flowanalysis, cecil, coco-cs, cowbell, cppformat, dbus-sharp-glib, dbus-sharp, gdcm, gnome-keyring-sharp, gudev-sharp-1.0, jackson-annotations, jackson-core, jboss-classfilewriter, jboss-jdeparser2, jetty8, json-spirit, lat, leveldb-sharp, libdecentxml-java, libjavaewah-java, libkarma, mono.reflection, monobristol, nuget, pinta, snakeyaml, taglib-sharp, tangerine, themonospot, tomboy-latex, widemargin, wordpress, xsddiagram, xsp, zeitgeist-sharp. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet:

reproducible.debian.net. Packages in experimental are now tested on armhf. (h01ger) Arch Linux packages in the multilib and community repositories (4,000 more source packages) are also being tested. All of these test results are better analyzed and nicely displayed together with each package. (h01ger) For Fedora, build jobs can now run in parallel. Two are currently running, now testing reproducibility of 785 source packages from Fedora 23. mock/1.2.3-1.1 has been uploaded to experimental to better build RPMs. (h01ger) Work has started on having automatic build node pools to maximize use of armhf build nodes. (Vagrant Cascadian)

diffoscope development. Version 43 has been released on December 15th. It has been dubbed "epic!" as it contains many contributions that were written around the summit in Athens. Baptiste Daroussin found that running diffoscope on some Tar archives could overwrite arbitrary files. This has been fixed by using libarchive instead of Python's internal Tar library and by adding a sanity check for destination paths. In any case, until proper sandboxing is implemented, don't run diffoscope on untrusted inputs outside an isolated, throw-away system. Mike Hommey identified that the CBFS comparator would needlessly waste time scanning big files. It will now not consider any files bigger than 24 MiB, 8 MiB more than the largest ROM created by coreboot at this time. An encoding issue related to Zip files has also been fixed. (Lunar) New comparators have been added: Android dex files (Reiner Herrmann), filesystem images using libguestfs (Reiner Herrmann), icons and JPEG images using libcaca (Chris Lamb), and OS X binaries (Clemens Lang). The comparator for Free Pascal Compilation Units will now only be used when the unit version matches the compiler one. (Levente Polyak) A new multi-file HTML output with on-demand loading of long diffs is available through the --html-dir option. On-demand loading requires jQuery, whose path can be specified through the --jquery option. The diffs can also simply be browsed by non-JavaScript users or when jQuery is not available. (Joachim Breitner) [screenshot: example of on-demand loading in diffoscope] Portability toward other systems has been improved: old versions of GNU diff are now supported (Mike McQuaid), the suggestion of the appropriate locale is now the more generic en_US.UTF-8 (Ed Maste), and the --list-tools option can now support multiple systems (Mattia Rizzolo, Levente Polyak, Lunar). Many internal changes and code clean-ups have been made, paving the way for parallel processing. (Lunar) Version 44 was released on December 18th, fixing an issue affecting .deb files lacking a md5sums file, introduced in a previous refactoring (Lunar). Support has been added for Mozilla-optimized Zip files. (Mike Hommey) The HTML output has been optimized in size (Mike Hommey, Esa Peuha, Lunar) and speed (Lunar), and will now properly number lines (Mike Hommey). A message will always be displayed when lines are ignored at the end of a diff (Lunar). For portability and consistency, the Python os.walk() function is now used instead of find to perform directory listing. (Lunar)

Documentation update

Package reviews. 143 reviews have been removed, 69 added and 22 updated in the previous week. Chris Lamb reported 12 new FTBFS issues. New issues identified this week: random_order_in_init_py_generated_by_python-genpy, timestamps_in_copyright_added_by_perl_dist_zilla, random_contents_in_dat_files_generated_by_chasen-dictutils_makemat, timestamps_in_documentation_generated_by_pandoc. Chris West did some improvements on the scripts used to manage notes in the misc repository.

Misc. Accounts of the reproducible builds summit in Athens were written by Thomas Klausner from NetBSD and Hans-Christoph Steiner from The Guardian Project. Some openSUSE developers are working on a hackweek on reproducible builds, which was discussed on the opensuse-packaging mailing-list.

14 November 2015

Craig Small: Mixing pysnmp and stdin

Depending on the application, sometimes you want to have some socket operations going (such as loading a website) while also reading stdin. There are plenty of examples of this in Python, which usually boil down to making stdin behave like a socket and mixing it into the list of sockets select() cares about. A while ago I asked on an email list whether I could have pysnmp use a different socket map so I could add my own sockets in (UDP, TCP and a zmq socket, to name a few), and Ilya, the author of pysnmp, explained how pysnmp can use a foreign socket map. The sample code below is merely a mixture of Ilya's example code and the way stdin gets mixed into the fold. I have also updated it to the high-level pysnmp API, which explains the slight differences in the calls.
from time import time
import sys
import asyncore
from pysnmp.hlapi import asyncore as snmpAC
from pysnmp.carrier.asynsock.dispatch import AsynsockDispatcher


# Reads whatever is typed on stdin and echoes it back.
class CmdlineClient(asyncore.file_dispatcher):
    def handle_read(self):
        buf = self.recv(1024)
        print "you said {}".format(buf)


def myCallback(snmpEngine, sendRequestHandle, errorIndication,
               errorStatus, errorIndex, varBinds, cbCtx):
    print "myCallback!!"
    if errorIndication:
        print(errorIndication)
        return
    if errorStatus:
        print('%s at %s' % (errorStatus.prettyPrint(),
              errorIndex and varBinds[int(errorIndex)-1] or '?')
             )
        return

    for oid, val in varBinds:
        if val is None:
            print(oid.prettyPrint())
        else:
            print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))

# One socket map shared between asyncore and the pysnmp dispatcher.
sharedSocketMap = {}
transportDispatcher = AsynsockDispatcher()
transportDispatcher.setSocketMap(sharedSocketMap)
snmpEngine = snmpAC.SnmpEngine()
snmpEngine.registerTransportDispatcher(transportDispatcher)
# Register stdin in the same map so the poll loop below watches it too.
sharedSocketMap[sys.stdin] = CmdlineClient(sys.stdin)

snmpAC.getCmd(
    snmpEngine,
    snmpAC.CommunityData('public'),
    snmpAC.UdpTransportTarget(('127.0.0.1', 161)),
    snmpAC.ContextData(),
    snmpAC.ObjectType(
        snmpAC.ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    cbFun=myCallback)

while True:
    asyncore.poll(timeout=0.5, map=sharedSocketMap)
    if transportDispatcher.jobsArePending() or transportDispatcher.transportsAreWorking():
        transportDispatcher.handleTimerTick(time())
The interesting lines in the above code are those that create the shared socket map, hand it to the pysnmp transport dispatcher with setSocketMap(), and register stdin in that same map, so that a single asyncore.poll() loop services both. With all this I can handle keyboard presses and network traffic, such as a simple SNMP poll.

18 October 2015

Lunar: Reproducible builds: week 25 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes. Niko Tyni wrote a new patch adding support for SOURCE_DATE_EPOCH in Pod::Man. This would complement or replace the previously implemented POD_MAN_DATE environment variable in a more generic way (a minimal sketch of how a tool can honour this variable follows at the end of this report). Niko Tyni proposed a fix to prevent mtime variation in directories due to debhelper usage of cp --parents -p.

Packages fixed. The following 119 packages became reproducible due to changes in their build dependencies: aac-tactics, aafigure, apgdiff, bin-prot, boxbackup, calendar, camlmix, cconv, cdist, cl-asdf, cli-common, cluster-glue, cppo, cvs, esdl, ess, faucc, fauhdlc, fbcat, flex-old, freetennis, ftgl, gap, ghc, git-cola, globus-authz-callout-error, globus-authz, globus-callout, globus-common, globus-ftp-client, globus-ftp-control, globus-gass-cache, globus-gass-copy, globus-gass-transfer, globus-gram-client, globus-gram-job-manager-callout-error, globus-gram-protocol, globus-gridmap-callout-error, globus-gsi-callback, globus-gsi-cert-utils, globus-gsi-credential, globus-gsi-openssl-error, globus-gsi-proxy-core, globus-gsi-proxy-ssl, globus-gsi-sysconfig, globus-gss-assist, globus-gssapi-error, globus-gssapi-gsi, globus-net-manager, globus-openssl-module, globus-rsl, globus-scheduler-event-generator, globus-xio-gridftp-driver, globus-xio-gsi-driver, globus-xio, gnome-control-center, grml2usb, grub, guilt, hgview, htmlcxx, hwloc, imms, kde-l10n, keystone, kimwitu++, kimwitu-doc, kmod, krb5, laby, ledger, libcrypto++, libopendbx, libsyncml, libwps, lprng-doc, madwimax, maria, mediawiki-math, menhir, misery, monotone-viz, morse, mpfr4, obus, ocaml-csv, ocaml-reins, ocamldsort, ocp-indent, openscenegraph, opensp, optcomp, opus, otags, pa-bench, pa-ounit, pa-test, parmap, pcaputils, perl-cross-debian, prooftree, pyfits, pywavelets, pywbem, rpy, signify, siscone, swtchart, tipa, typerep, tyxml, unison2.32.52, unison2.40.102, unison, uuidm, variantslib, zipios++, zlibc, zope-maildrophost. The following packages became reproducible after getting fixed: Packages which could not be tested: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: Lunar reported that test strings depend on the default character encoding of the build system in ongl.

reproducible.debian.net. The 189 packages composing the Arch Linux core repository are now being tested. No packages are currently reproducible, but most of the time the difference is limited to metadata. This has already gained some interest in the Arch Linux community. An explicit log message is now visible when a build has been killed due to the 12 hours timeout. (h01ger) Remote build setup has been made more robust and self maintenance has been further improved. (h01ger) The minimum age for rescheduling of already tested amd64 packages has been lowered from 14 to 7 days, thanks to the increase of hardware resources sponsored by ProfitBricks last week. (h01ger)

diffoscope development. diffoscope version 37 has been released on October 15th. It adds support for two new file formats (CBFS images and Debian .dsc files). After proposing the required changes to TLSH, fuzzy hashes are now computed incrementally. This will avoid reading entire files into memory, which caused problems for large packages. New tests have been added for the command-line interface. More character encoding issues have been fixed. Malformed md5sums will now be compared as binary files instead of making diffoscope crash, amongst several other minor fixes. Version 38 was released two days later to fix the versioned dependency on python3-tlsh.

strip-nondeterminism development. strip-nondeterminism version 0.013-1 has been uploaded to the archive. It fixes an issue with nonconformant PNG files with trailing garbage, reported by Roland Rosenfeld.

disorderfs development. disorderfs version 0.4.1-1 is a stop-gap release that will disable lock propagation, unless --share-locks=yes is specified, as it is still affected by unidentified issues.

Documentation update. Lunar has been busy creating a proper website for reproducible-builds.org that would be a common location for news, documentation, and tools for all free software projects working on reproducible builds. It's not yet ready to be published, but it's surely getting there. [screenshots: homepage and "Who's involved?" page of the future reproducible-builds.org website]

Package reviews. 103 reviews have been removed, 394 added and 29 updated this week. 72 FTBFS issues were reported by Chris West and Niko Tyni. New issues: random_order_in_static_libraries, random_order_in_md5sums.
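For reference, the SOURCE_DATE_EPOCH convention mentioned under the toolchain fixes is simply an environment variable carrying a Unix timestamp that a build tool should embed instead of the current time. A minimal sketch of honouring it (not the actual Pod::Man patch, which is written in Perl):

import os
import time

# Use SOURCE_DATE_EPOCH when set, so repeated builds embed the same date;
# otherwise fall back to the current time as before.
build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
print("Generated on", time.strftime("%Y-%m-%d", time.gmtime(build_time)))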

14 October 2014

Julian Andres Klode: Key transition

I started transitioning from 1024D to 4096R. The new key is available at: https://people.debian.org/~jak/pubkey.gpg and the keys.gnupg.net key server. A very short transition statement is available at: https://people.debian.org/~jak/transition-statement.txt and included below (the http version might get extended over time if needed). The key consists of one master key and 3 sub keys (signing, encryption, authentication). The sub keys are stored on an OpenPGP v2 Smartcard. That's really cool, isn't it? Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512
Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.
The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.
the old key was:
pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2
And the new key is:
pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----

Filed under: Uncategorized

5 July 2014

John Goerzen: The Heights of Coronado

Near the beautiful Swedish town of Lindsborg, Kansas, there stands a hill known as Coronado Heights. It lies in the midst of the Smoky Hills, named for the smoke-like mist that sometimes hangs in them. We Kansans smile our usual smile when we tell the story of how Francisco Vázquez de Coronado famously gave up his search for gold after reaching this point in Kansas. Anyhow, it was just over a year ago that Laura, Jacob, Oliver, and I went to Coronado Heights at the start of summer 2013, our first full day together as a family. Atop Coronado Heights sits a "castle", an old WPA project from the 1930s. [photos: IMG_9803, IMG_9824] The view from up there is pretty nice. [photo: IMG_9806] And, of course, Jacob and Oliver wanted to explore the grounds. [photo: IMG_9813] As exciting as the castle was, simple rocks and sand seemed to be just as entertaining. [photo: IMG_9835] After Coronado Heights, we went to a nearby lake for a picnic. After that, Jacob and Oliver wanted to play at the edge of the water. They loved to throw rocks in and observe the splash. Of course, it pretty soon descended (or, if you are a boy, "ascended") into a game of "splash your brother", and then of "splash Dad and Laura". [photo: 2013-05-27] Fun was had by all. What a wonderful day! Writing the story reminds me of a little while before that: the first time all four of us enjoyed dinner and s'mores at a fire by our creek. [photo: IMG_9756] Jacob and Oliver insisted on sitting or, well, flopping on Laura's lap to eat. It made me smile. (And yes, she is wearing a Debian hat.)

17 September 2013

Cyril Brulebois: Fixing bugs

It's a somewhat strange feeling to spend time fixing things that broke instead of implementing new things, but it's not like I'm a creative guy anyway. For those last two points, the long-term plan is: hopefully getting this done this week (famous last words, eh?).

27 July 2010

Petter Reinholdtsen: First Debian Edu test release (alpha0) based on Squeeze is released

I just posted this announcement culminating several months of work with the next Debian Edu release. Not nearly done, but one major step completed.
This is the first test release based on Squeeze. The focus of this release is to test the user application selection. To have a look, install the standalone profile and let the developers know if the set of installed packages, i.e. applications, should be modified. If some user application is missing, or if there are some applications that no longer make sense to be included in Debian Edu, please let us know. Also, if a useful application is missing the translation for your language of choice, please let us know too. In addition, feedback and help to polish the desktop (menus, artwork, starters, etc.) are appreciated. We would like to ship a nice and handy KDE4 desktop targeted for schools out of the box. The other profiles should be installable, but there is a lot more work left to be done before they are ready, so do not expect too much. Changes compared to the Lenny-based version
  • Everything from Debian Squeeze
    • Desktop environment KDE 4.4 => the new KDE desktop in combination with some new artwork
    • Web browser Iceweasel 3.5
    • OpenOffice.org 3.2
    • Educational toolbox GCompris 9.3
    • Music creator Rosegarden 10.04.2
    • Image editor Gimp 2.6.10
    • Virtual universe Celestia 1.6.0
    • Virtual stargazer Stellarium 0.10.4
    • 3D modeler Blender 2.49.2 (new application)
    • Video editor Kdenlive 0.7.7 (new application)
  • Now using Kerberos for password checking (migration not finished). Enabled for:
    • PAM
    • LDAP
    • IMAP
    • SMTP (sender verification)
  • New experimental roaming workstation profile for laptops.
  • Show welcome page to users when they first log in. The URL is fetched from LDAP.
  • New LXDE desktop option, in addition to KDE (default) and Gnome.
  • General cleanup (not finished)
The following features are not working as they should
  • No web based administration tool for creating users and groups. The scripts ldap-createuser-krb and ldap-add-user-to-group can be used for testing.
  • DVD installs are missing debian-installer images for the PXE boot, and do not set up the PXE menu on eth0 because of this. LTSP clients should still boot from eth1 on thin client servers.
  • The restructured KDE menu is not implemented.
  • The LDAP server setup needs to be reviewed for security.
  • The LDAP directory structure needs to be reworked.
  • Different sets of packages are installed when using the DVD and the netinst CD. More packages are installed using the netinst CD.
  • The jackd package fails to install. This is believed to be caused by some ongoing transition, and hopefully should be solved soon. The jackd1 package can be installed manually for those that need it.
  • Some packages lack translations. See http://wiki.debian.org/DebianEdu/Status/Squeeze for updated status, and help out with translations.
To download this multiarch netinstall release you can use … To download this multiarch DVD release you can use … There is no source DVD available yet. It will be prepared when we get closer to the final release. The MD5SUMs of these images are:
  • 3dbf45d59f42a53518b6e3c9ec3b5eb6 debian-edu-6.0.0+edua0-CD.iso
  • 22f2cbfce281d1c6e478be452638675d debian-edu-6.0.0+edua0-DVD.iso
The SHA1SUMs of these images are:
  • c53d1b69b40cf37cd27aefaf33f6f6a3821bedf0 debian-edu-6.0.0+edua0-CD.iso
  • 2ec29d7db676d59d32197b05c277ffe16348376c debian-edu-6.0.0+edua0-DVD.iso
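If you want to check a downloaded image against the sums above before installing from it, a small Python sketch of my own (not part of the announcement; it assumes the ISOs sit in the current directory) would be:
import hashlib

EXPECTED = {
    "debian-edu-6.0.0+edua0-CD.iso":  ("3dbf45d59f42a53518b6e3c9ec3b5eb6",
                                       "c53d1b69b40cf37cd27aefaf33f6f6a3821bedf0"),
    "debian-edu-6.0.0+edua0-DVD.iso": ("22f2cbfce281d1c6e478be452638675d",
                                       "2ec29d7db676d59d32197b05c277ffe16348376c"),
}

def checksums(path, bufsize=1 << 20):
    # Compute MD5 and SHA1 in a single pass over the file.
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

for name, expected in EXPECTED.items():
    print(name, "OK" if checksums(name) == expected else "MISMATCH")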
How to report bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugsInBugzilla Please direct replies to debian-edu@lists.debian.org

23 April 2010

Jonathan McDowell: Out, damn'd PGP v3

Nearly a year ago people started worrying about the complexity of attacking SHA-1 being reduced and the potential availability of viable attacks against things such as PGP keys that used SHA-1. Many people (myself included) generated a new key, or updated preferences on keys that were otherwise strong enough. There were worries about what this might mean for Debian. We were getting ahead of ourselves a bit though. Firstly, there haven't been any viable public attacks that I'm aware of (though of course this doesn't mean we shouldn't continue to migrate away), but secondly there's a much easier method of attack: PGP v3 keys. To quote RFC4880:

V3 keys are deprecated. They contain three weaknesses. First, it is relatively easy to construct a V3 key that has the same Key ID as any other key because the Key ID is simply the low 64 bits of the public modulus. Secondly, because the fingerprint of a V3 key hashes the key material, but not its length, there is an increased opportunity for fingerprint collisions. Third, there are weaknesses in the MD5 hash algorithm that make developers prefer other algorithms. See below for a fuller discussion of Key IDs and fingerprints.
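As a toy illustration of the first weakness (a Python sketch of my own, not from the post or the RFC; the target value is simply borrowed from the key list further down), the v3 Key ID is nothing more than the low 64 bits of the public modulus, so any modulus ending in the right 64 bits produces the same Key ID:
def v3_key_id(modulus: int) -> str:
    # RFC 4880: a v3 Key ID is simply the low 64 bits of the RSA public modulus.
    return format(modulus & 0xFFFFFFFFFFFFFFFF, "016X")

target = 0x0D2156BD3D97C149   # a Key ID from the list below, treated as "low bits to hit"
other  = (1 << 128) | target  # a completely different (toy-sized) modulus with the same tail
assert v3_key_id(other) == v3_key_id(target)
print(v3_key_id(other))       # 0D2156BD3D97C149
A v4 fingerprint, by contrast, hashes the whole key packet, so colliding it means breaking the hash rather than just picking a convenient modulus.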
At the time of writing Debian has 21 remaining v3 keys. This is a significant improvement over a year ago, when we had 200, but it's still 21 more than I'd like. I've been chasing people since last May (starting with those who had v3 + v4 keys, all of whom now only have a v4 key) and we're down to the stragglers. So it's time to name and shame, in the hope of kicking them into action. The following keys are what's left (doesn't match the currently active keyring because we've had a few replacements since the last promote):

0x0D2156BD3D97C149 Michael Stone <mstone>
0x225FD911CD269B31 Carlos Barros <cbf>
0x31E73F14E298966D James R. Van Zandt <jrv>
0x366CD3FEEBC11B01 Chris Waters <xtifr>
0x37A73FE355E8BC4D Frederic Lepied <lepied>
0x3E973117DCC528E9 Ardo van Rangelrooij <ardo>
0x5C7A46637953F711 Rich Sahlender <rsahlen>
0x5D6560F85F30F005 Craig Brozefsky <craig>
0x6B0E322836129171 Jim Westveer <jwest>
0x723724B4A5B6DD31 Christian Meder <meder>
0x7629B22ED71DAABD Adrian Bridgett <bridgett>
0x8FFC405EFD5A67CD Adam Di Carlo <aph>
0xB0D269DE17F3D4D1 Matthew Vernon <matthew>
0xBC151FC8D2A913A1 Peter S Galbraith <psg>
0xC1A0A171C2DCD3B1 Jim Mintha <jmintha>
0xC3168EBA23F5ADDB Ian Jackson <iwj>
0xCE951B1160D74C7D Patrick Cole <ltd>
0xE82A8B0D57137FE5 Paul Seelig <pseelig>
0xF20E242CE77AC835 Brian White <bcwhite>
0xFBAA570C3087194D Alan Bain <afrb2>
0xFFD1B4AC7C19FD19 David Engel <david>

Of these keys only 2 voted in the recent DPL election. 8 have failed to make any response to my mails (3 since last August). Only 9 have uploaded a package since August 2008. And 10 were already known to the MIA database. Some of them have stated they'll sort out a new key, but have not yet done so.

If you are one of these people, please either get a new key sorted and signed and reply to the mails I've sent you, or reply and say you no longer wish to be involved in Debian. And if you know any of these people, encourage them to get a new key sorted and offer to sign it for them.

30 December 2008

Mike Hommey: Another threat to the internet

Some people presented a rogue Certificate Authority at this year's CCC. What is surprising is not so much that they could create such a rogue CA, but the fact that MD5, despite having been broken for several years, is still in use by some important CAs to sign SSL certificates. Amazing.

10 November 2008

Kurt Roeckx: Finding which sector belongs to which file on a RAID device.

I got this nice message (and some others) in my log today:
smartd: Device: /dev/sdd, 2 Currently unreadable (pending) sectors
After running smartctl -t long /dev/sdd, smartctl now reports:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   089   089   000    Old_age   Always       -       8719
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   200   200   000    Old_age   Offline      -       0
[...]
SMART Error Log Version: 1
ATA Error Count: 10 (device log contains only the most recent five errors)
[...]
Error 10 occurred at disk power-on lifetime: 8692 hours (362 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 08 78 e5 89 e0  Error: UNC 8 sectors at LBA = 0x0089e578 = 9037176
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 78 e5 89 2f 00  45d+06:36:27.880  READ DMA EXT
  25 00 08 70 e5 89 2f 00  45d+06:36:27.798  READ DMA EXT
  25 00 08 68 e5 89 2f 00  45d+06:36:27.308  READ DMA EXT
  25 00 08 e0 77 e5 35 00  45d+06:36:27.295  READ DMA EXT
  25 00 08 60 e5 89 2f 00  45d+06:36:26.653  READ DMA EXT
[...]
Error 9 occurred at disk power-on lifetime: 8692 hours (362 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 08 60 e5 89 e0  Error: UNC 8 sectors at LBA = 0x0089e560 = 9037152
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%      8715         797566305
# 2  Extended offline    Completed without error       00%      8198         -
Notice that attribute 197 Current_Pending_Sector indicates that there are 2 sectors with a problem. Note that the SMART error log shows LBA sectors 9037176 (0x89e578) and 9037152 (0x89e560), but that field is limited to 0xffffff. The self-test log shows us the proper LBA of the error, 797566305 (0x2f89e561), which has the 0x2f in front, and also shows us that it is the second sector in the block of 8 we tried to read. The kernel log also shows:
ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
ata4.00: (BMDMA2 stat 0xc0009)
ata4.00: cmd 25/00:08:60:e5:89/00:00:2f:00:00/e0 tag 0 cdb 0x0 data 4096 in
         res 51/40:00:61:e5:89/00:00:2f:00:00/f0 Emask 0x9 (media error)
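As a quick aside of my own (not in the original post), the relationship between the truncated LBAs in the error log and the full LBA from the self-test is easy to check in Python:
lba = 797566305        # LBA_of_first_error from the SMART self-test log
print(hex(lba))        # 0x2f89e561 -- the leading 0x2f is what the error log cannot show
print(lba & 0xFFFFFF)  # 9037153, one sector past 9037152 (0x89e560), the start of the failed 8-sector read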
So I've tried to reproduce the error with:
# dd if=/dev/sdd of=797566305 skip=797566305 bs=512 count=1 iflag=direct
1+0 records in
1+0 records out
512 bytes (512 B) copied, 1.27727 s, 0.4 kB/s
Notice that it can actually read that block; it just seems to take a while. And the error log now contains:
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 01 61 e5 89 e0  Error: UNC 1 sectors at LBA = 0x0089e561 = 9037153
Because it's part of a RAID device I could just let the whole disk resync, but I was a little curious which files were affected. So I found this document that explains how to find which files have the problem, but it only has examples covering ext2/ext3 on a plain partition and on LVM, and nothing about RAID devices. So it looks like someone using RAID 0 will have a hard time finding the file that's having a problem. Following the document, we start with:
# fdisk -lu /dev/sdd
Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x000578f9
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *          63      385559      192748+  fd  Linux raid autodetect
/dev/sdd2       972864270   976768064     1951897+  fd  Linux raid autodetect
/dev/sdd3          385560   972864269   486239355   fd  Linux raid autodetect
Partition table entries are not in disk order
In /proc/mdstat we see:
md1 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      1458717696 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
So sdd3 is part of md1 and I have an ext3 filesystem on that. tune2fs -l /dev/md1 | grep Block gives:
Block count:              364679424
Block size:               4096
Blocks per group:         32768
mdadm tells me:
# mdadm -QE /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 747b071a:a70f723b:c3a9ed44:15845cbf
  Creation Time : Mon Nov 12 20:13:51 2007
     Raid Level : raid5
  Used Dev Size : 486239232 (463.71 GiB 497.91 GB)
     Array Size : 1458717696 (1391.14 GiB 1493.73 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Update Time : Mon Nov 10 10:57:28 2008
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 4b3147f4 - correct
         Events : 78
         Layout : left-symmetric
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     3       8       51        3      active sync   /dev/sdd3
   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
I've been told that 0.90 stores the superblock at the end of the device, so that makes it a little easier. And this seems to agree:
# cat /sys/block/md1/md/dev-sdd3/offset
0
We have a left-symmetric layout, so in raid5_compute_sector() we see
	stripe = chunk_number / data_disks;
	*dd_idx = chunk_number % data_disks;
[...]
	case ALGORITHM_LEFT_SYMMETRIC:
		*pd_idx = data_disks - stripe % raid_disks;
		*dd_idx = (*pd_idx + 1 + *dd_idx) % raid_disks;
So it looks like the chunks are located on the disk like:
stripe  sda     sdb     sdc     sdd
0       D0      D1      D2      P0-2
1       D4      D5      P3-5    D3
2       D8      P6-8    D6      D7
3       P9-11   D9      D10     D11
4       D12     D13     D14     P12-14
Each of those chunks is 64K. The problem is at LBA 797566305 and the partition starts at sector 385560, so that's sector 797566305 - 385560 = 797180745 inside partition sdd3. Each sector is 512 bytes and a chunk is 65536 bytes, which gives us 128 sectors per chunk. To find out which stripe the sector is on we divide: 797180745 / 128 = 6227974.5703125. This means the stripe number is 6227974 and that it's sector 0.5703125 * 128 = 73 within that chunk on sdd3. The parity for that stripe is on disk pd_idx = 3 - 6227974 % 4 = 1, i.e. sdb. To get the data chunk number in the array we multiply the stripe number by the number of data disks (3); since sdd is the fourth disk and the parity is on the second disk, this is the second data chunk of the stripe, so we add 1: 6227974 * 3 + 1 = 18683923. Converting that back to sectors, we multiply by the number of sectors per chunk again and add our offset within the chunk: 18683923 * 128 + 73 = 2391542217. To test that our math is any good we try:
# dd if=/dev/md1 of=797566305.2 skip=2391542217 bs=512 count=1 iflag=direct
1+0 records in
1+0 records out
512 bytes (512 B) copied, 5.99103 s, 0.1 kB/s
We see an error message in the kernel log and there are 2 new errors in the error log, so it seems that everything was calculated correctly. The two dumped files are also identical. The file system uses blocks of 4096 bytes, so for the file system this is block 2391542217 * 512 / 4096 = 298942777.125. Then we use debugfs:
# debugfs
debugfs 1.41.2 (02-Oct-2008)
debugfs:  open /dev/md1
debugfs:  icheck 298942777
Block   Inode number
298942777       <block not found>
Which isn't making any sense to me, since I get that error about every 2 hours when a certain cronjob runs. 298942777 / 32768 (blocks per group) = 9123.00955, so I wonder if ext3 is keeping some metadata there that debugfs doesn't report. Anyway, at some point I was running smartctl -t long /dev/sdd again because I couldn't reproduce the error anymore, the cronjob started, and I then found a lot of errors in my log again, and now they included:
raid5:md1: read error corrected (8 sectors at 797180744 on sdd3)
It's so much fun that disks always behave the same way when trying to read the same sector. smartctl now reports:
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   200   200   000    Old_age   Offline      -       0
I assume the kernel removed the pending sector by writing to it. The other pending error was already gone for some time; I assume new data got written to that one.
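For anyone who wants to re-check the chunk arithmetic in this post, here is a minimal Python sketch of my own (not from the post), under the geometry reported above: a 4-disk left-symmetric RAID5 with 64 KiB chunks, where sdd3 is RaidDevice 3 and starts at sector 385560:
SECTOR = 512                       # bytes per sector
CHUNK_SECTORS = 65536 // SECTOR    # 128 sectors per 64 KiB chunk
RAID_DISKS = 4
DATA_DISKS = RAID_DISKS - 1        # RAID5: one parity chunk per stripe
PART_START = 385560                # first sector of sdd3, from fdisk -lu
SDD3_IDX = 3                       # sdd3 is RaidDevice 3, from mdadm -QE

def md_sector(disk_sector):
    # Map a sector inside sdd3 back to the corresponding sector of /dev/md1,
    # inverting the left-symmetric layout from raid5_compute_sector().
    stripe, offset = divmod(disk_sector, CHUNK_SECTORS)
    pd_idx = DATA_DISKS - stripe % RAID_DISKS        # which disk holds parity for this stripe
    assert SDD3_IDX != pd_idx, "this sector holds parity, not file data"
    dd_idx = (SDD3_IDX - pd_idx - 1) % RAID_DISKS    # data index within the stripe
    chunk = stripe * DATA_DISKS + dd_idx             # data chunk number in the array
    return chunk * CHUNK_SECTORS + offset

lba = 797566305                            # failing LBA from the SMART self-test log
in_sdd3 = lba - PART_START                 # 797180745
array_sector = md_sector(in_sdd3)          # 2391542217, as computed above
fs_block = array_sector * SECTOR // 4096   # 298942777 for a 4096-byte ext3 block size
print(in_sdd3, array_sector, fs_block)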

21 September 2008

Wouter Verhelst: SSL "telnet"

A common way to debug a network server is to use 'telnet' or 'nc' to connect to the server and issue some commands in the protocol to verify whether everything is working correctly. That obviously only works for ASCII protocols (as opposed to binary protocols), and it obviously also only works if you're not using any encryption. But that doesn't mean you can't test an encrypted protocol in a similar way, thanks to openssl's s_client:
wouter@country:~$ openssl s_client -host samba.grep.be -port 443
CONNECTED(00000003)
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
verify return:1
---
Certificate chain
 0 s:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
   i:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDXDCCAsWgAwIBAgIJAITRhiXp+37JMA0GCSqGSIb3DQEBBQUAMH0xCzAJBgNV
BAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQHEwhNZWNoZWxlbjEUMBIG
A1UEChMLTml4U3lzIEJWQkExFDASBgNVBAMTC3N2bi5ncmVwLmJlMR0wGwYJKoZI
hvcNAQkBFg53b3V0ZXJAZ3JlcC5iZTAeFw0wNTA1MjEwOTMwMDFaFw0xNTA1MTkw
OTMwMDFaMH0xCzAJBgNVBAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQH
EwhNZWNoZWxlbjEUMBIGA1UEChMLTml4U3lzIEJWQkExFDASBgNVBAMTC3N2bi5n
cmVwLmJlMR0wGwYJKoZIhvcNAQkBFg53b3V0ZXJAZ3JlcC5iZTCBnzANBgkqhkiG
9w0BAQEFAAOBjQAwgYkCgYEAsGTECq0VXyw09Zcg/OBijP1LALMh9InyU0Ebe2HH
NEQ605mfyjAENG8rKxrjOQyZzD25K5Oh56/F+clMNtKAfs6OuA2NygD1/y4w7Gcq
1kXhsM1MOIOBdtXAFi9s9i5ZATAgmDRIzuKZ6c2YJxJfyVbU+Pthr6L1SFftEdfb
L7MCAwEAAaOB4zCB4DAdBgNVHQ4EFgQUtUK7aapBDaCoSFRWTf1wRauCmdowgbAG
A1UdIwSBqDCBpYAUtUK7aapBDaCoSFRWTf1wRauCmdqhgYGkfzB9MQswCQYDVQQG
EwJCRTEQMA4GA1UECBMHQW50d2VycDERMA8GA1UEBxMITWVjaGVsZW4xFDASBgNV
BAoTC05peFN5cyBCVkJBMRQwEgYDVQQDEwtzdm4uZ3JlcC5iZTEdMBsGCSqGSIb3
DQEJARYOd291dGVyQGdyZXAuYmWCCQCE0YYl6ft+yTAMBgNVHRMEBTADAQH/MA0G
CSqGSIb3DQEBBQUAA4GBADGkLc+CWWbfpBpY2+Pmknsz01CK8P5qCX3XBt4OtZLZ
NYKdrqleYq7r7H8PHJbTTiGOv9L56B84QPGwAzGxw/GzblrqR67iIo8e5reGbvXl
s1TFqKyvoXy9LJoGecMwjznAEulw9cYcFz+VuV5xnYPyJMLWk4Bo9WCVKGuAqVdw
-----END CERTIFICATE-----
subject=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
issuer=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=svn.grep.be/emailAddress=wouter@grep.be
---
No client certificate CA names sent
---
SSL handshake has read 1428 bytes and written 316 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 65E69139622D06B9D284AEDFBFC1969FE14E826FAD01FB45E51F1020B4CEA42C
    Session-ID-ctx: 
    Master-Key: 606553D558AF15491FEF6FD1A523E16D2E40A8A005A358DF9A756A21FC05DFAF2C9985ABE109DCD29DD5D77BE6BC5C4F
    Key-Arg   : None
    Start Time: 1222001082
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
HEAD / HTTP/1.1
Host: svn.grep.be
User-Agent: openssl s_client
Connection: close
HTTP/1.1 404 Not Found
Date: Sun, 21 Sep 2008 12:44:55 GMT
Server: Apache/2.2.3 (Debian) mod_auth_kerb/5.3 DAV/2 SVN/1.4.2 PHP/5.2.0-8+etch11 mod_ssl/2.2.3 OpenSSL/0.9.8c
Connection: close
Content-Type: text/html; charset=iso-8859-1
closed
wouter@country:~$ 
As you can see, we connect to an HTTPS server, get to see what the server's certificate looks like, issue some commands, and the server responds properly. It also works for (some) protocols that work in a STARTTLS kind of way:
wouter@country:~$ openssl s_client -host samba.grep.be -port 587 -starttls smtp
CONNECTED(00000003)
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
verify return:1
---
Certificate chain
 0 s:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
   i:/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDBDCCAm2gAwIBAgIJAK53w+1YhWocMA0GCSqGSIb3DQEBBQUAMGAxCzAJBgNV
BAYTAkJFMRAwDgYDVQQIEwdBbnR3ZXJwMREwDwYDVQQHEwhNZWNoZWxlbjEUMBIG
A1UEChMLTml4U3lzIEJWQkExFjAUBgNVBAMTDXNhbWJhLmdyZXAuYmUwHhcNMDgw
OTIwMTYyMjI3WhcNMDkwOTIwMTYyMjI3WjBgMQswCQYDVQQGEwJCRTEQMA4GA1UE
CBMHQW50d2VycDERMA8GA1UEBxMITWVjaGVsZW4xFDASBgNVBAoTC05peFN5cyBC
VkJBMRYwFAYDVQQDEw1zYW1iYS5ncmVwLmJlMIGfMA0GCSqGSIb3DQEBAQUAA4GN
ADCBiQKBgQCee+Ibci3atTgoJqUU7cK13oD/E1IV2lKcvdviJBtr4rd1aRWfxcvD
PS00jRXGJ9AAM+EO2iuZv0Z5NFQkcF3Yia0yj6hvjQvlev1OWxaWuvWhRRLV/013
JL8cIrKYrlHqgHow60cgUt7kfSxq9kjkMTWLsGdqlE+Q7eelMN94tQIDAQABo4HF
MIHCMB0GA1UdDgQWBBT9N54b/zoiUNl2GnWYbDf6YeixgTCBkgYDVR0jBIGKMIGH
gBT9N54b/zoiUNl2GnWYbDf6YeixgaFkpGIwYDELMAkGA1UEBhMCQkUxEDAOBgNV
BAgTB0FudHdlcnAxETAPBgNVBAcTCE1lY2hlbGVuMRQwEgYDVQQKEwtOaXhTeXMg
QlZCQTEWMBQGA1UEAxMNc2FtYmEuZ3JlcC5iZYIJAK53w+1YhWocMAwGA1UdEwQF
MAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAAnMdbAgLRJ3xWOBlqNjLDzGWAEzOJUHo
5R9ljMFPwt1WdjRy7L96ETdc0AquQsW31AJsDJDf+Ls4zka+++DrVWk4kCOC0FOO
40ar0WUfdOtuusdIFLDfHJgbzp0mBu125VBZ651Db99IX+0BuJLdtb8fz2LOOe8b
eN7obSZTguM=
-----END CERTIFICATE-----
subject=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
issuer=/C=BE/ST=Antwerp/L=Mechelen/O=NixSys BVBA/CN=samba.grep.be
---
No client certificate CA names sent
---
SSL handshake has read 1707 bytes and written 351 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 1024 bit
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 6D28368494A3879054143C7C6B926C9BDCDBA20F1E099BF4BA7E76FCF357FD55
    Session-ID-ctx: 
    Master-Key: B246EA50357EAA6C335B50B67AE8CE41635EBCA6EFF7EFCE082225C4EFF5CFBB2E50C07D8320E0EFCBFABDCDF8A9A851
    Key-Arg   : None
    Start Time: 1222000892
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
250 HELP
quit
221 samba.grep.be closing connection
closed
wouter@country:~$ 
OpenSSL here connects to the server, issues a proper EHLO command, does STARTTLS, and then gives me the same data as it did for the HTTPS connection. Isn't that nice.
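If you prefer to script this kind of probe, the plain-TLS case can also be done with Python's standard ssl module; the sketch below is my own and not from the post (and since ssl has no STARTTLS helper, the SMTP variant is still a job for openssl s_client or a mail library):
import socket, ssl

HOST = "samba.grep.be"            # the server used in the post; any HTTPS host works

ctx = ssl.create_default_context()
ctx.check_hostname = False        # the post's certificate is self-signed,
ctx.verify_mode = ssl.CERT_NONE   # so verification is switched off here

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version(), tls.cipher())           # negotiated protocol and cipher
        tls.sendall(b"HEAD / HTTP/1.1\r\n"
                    b"Host: svn.grep.be\r\n"
                    b"Connection: close\r\n\r\n")
        print(tls.recv(4096).decode("iso-8859-1"))   # the HTTP response headers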
